Runtime hook for breadth execution patterns #5425
Conversation
end

@dataloader.run
GraphQL::Execution::Interpreter::Resolve.resolve_each_depth(@lazies_at_depth, @dataloader)
In the actual prototype, I'm running the dataloader inline with the field, but this lazy step runs out-of-band after a batch of fields has been queued. It sounds like, with #5422, I'd just run the dataloader once in that out-of-band position?
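To make that concrete, here is a rough sketch of the out-of-band shape being described, assuming nothing about the prototype beyond the public Dataloader API; `pending_field_jobs` and the job bodies are invented stand-ins:

```ruby
require "graphql"

# Jobs for several fields are queued first, then the dataloader runs once to
# drain the whole batch, instead of running inline with each field. Only
# Dataloader#append_job and #run are real graphql-ruby APIs here.
dataloader = GraphQL::Dataloader.new

pending_field_jobs = [
  -> { puts "resolve field a" },
  -> { puts "resolve field b" },
]

pending_field_jobs.each do |job|
  dataloader.append_job { job.call } # queue work without running it yet
end

dataloader.run # single out-of-band run executes everything queued above
```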
def evaluate_breadth_selection(objects, parent_type, node)
  result_key = node.alias || node.name
  @breadth_results_by_key[result_key] = Array.new(objects.size)
In the actual prototype, breadth results are keyed with a unique representation of a field path.
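For illustration only (the join format and variable names below are assumptions, not the prototype's actual key scheme), a path-based key avoids the collisions that a plain alias/name key allows:

```ruby
# Keying by alias/name alone can collide when the same field name appears under
# different parents; a key derived from the full field path stays unique.
path       = ["posts", "author"]             # ancestry of the current selection
result_key = "name"                          # node.alias || node.name, as in the diff
path_key   = (path + [result_key]).join(".") # => "posts.author.name"
```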
I'm game to merge this as-is with the caveat that the integration point for this customization will change in the near future. I expect to continue modifying execution code with ideas from #5389, specifically using a queue instead of recursive method calls and merging
This is an interesting prospect. What's the grain of the queued work? Are you queuing an object+field at a time, then having the ejected result feed the next enqueued generation? If so, this sounds like an excellent design for a formal integration. If a breadth executor could enqueue a whole bunch of values for a field and get all the inner results back out, that gets us exactly what we need.
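As a purely hypothetical toy of that grain, not taken from #5389 or this PR, queued work could be (object, field) pairs whose results enqueue the next generation:

```ruby
# Resolving one work item enqueues the next generation of (object, field)
# pairs instead of recursing into child selections immediately.
WorkItem = Struct.new(:object, :field)

data  = { "post" => { "author" => { "name" => "a" } } }
queue = [WorkItem.new(data, "post")]

until queue.empty?
  item   = queue.shift
  result = item.object[item.field] # "resolve" the field
  next unless result.is_a?(Hash)
  result.each_key { |key| queue << WorkItem.new(result, key) } # next generation
end
```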
I thoroughly support this, as it amounts to a substantial reduction in allocations. Any allocation that scales 1:1 with response objects (or worse, with fields) comes at a premium. In Cardinal, I believe the only 1:1 response allocation we make is the response tree itself, which is unavoidable.
This adds a hook into the execution runtime that allows an evaluated selection to exit with its resolved inner value. This lets a breadth-first execution process run a single generation of resolvers at a time across a set of objects, for example:
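(The sketch below is illustrative only; the objects and fields are invented and are not taken from this PR or its tests.)

```ruby
# Depth-first execution would fully resolve the first post (author, then
# author.name) before touching the second. A breadth pass instead runs one
# field across every object, then carries the whole set of inner results down
# to the next field.
Author = Struct.new(:name)
Post   = Struct.new(:author)

posts = [Post.new(Author.new("a")), Post.new(Author.new("b"))]

authors = posts.map(&:author) # generation 1: `author` across all posts
names   = authors.map(&:name) # generation 2: `name` across all authors
# names => ["a", "b"]
```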
This general breadth capability relies on two runtime methods (a rough usage sketch follows):
evaluate_selection(result_key, ast_nodes, selections_result)
exit_with_inner_result?(inner_result, result_name, selection_result)
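The following assumes only the signature above; the collecting class and its instance variable are invented for illustration and are not the PR's SimpleBreadthRuntime:

```ruby
# Standalone illustration of the exit hook's contract (invented class and ivar
# names; not the gem's actual wiring): resolved inner values are collected per
# result key so a breadth executor can evaluate their child selections in one
# batch later.
class CollectingRuntime
  attr_reader :inner_results_by_key

  def initialize
    @inner_results_by_key = Hash.new { |hash, key| hash[key] = [] }
  end

  # Mirrors the signature from the description above. Returning a truthy value
  # is what would tell the runtime to stop descending into this selection.
  def exit_with_inner_result?(inner_result, result_name, _selection_result)
    @inner_results_by_key[result_name] << inner_result
    true
  end
end

runtime = CollectingRuntime.new
runtime.exit_with_inner_result?({ "name" => "a" }, "author", nil)
runtime.exit_with_inner_result?({ "name" => "b" }, "author", nil)
runtime.inner_results_by_key["author"] # => [{"name"=>"a"}, {"name"=>"b"}]
```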
The expected capabilities are covered by a simple example of a breadth-based runtime in tests. This is still an experimental workflow, so it seems worthwhile to keep the new code footprint minimal for the time being (just exit_with_inner_result?) and in a state that can be backed out easily. In the future, we'd ideally formalize a structure like this SimpleBreadthRuntime as a supported library feature. The actual breadth prototype I have in the works uses the same basic design while caching breadth objects across fields and consolidating the lazy hooks across scopes. It appears capable of passing some existing test suites while running slightly faster than native depth-first execution.